concept bottleneck model
- Asia > Vietnam (0.04)
- Oceania > New Zealand > North Island > Auckland Region > Auckland (0.04)
- Europe > Switzerland (0.04)
- (2 more...)
- Research Report > Experimental Study (0.93)
- Instructional Material (0.87)
- Health & Medicine > Therapeutic Area (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
- North America > Canada > Ontario > Toronto (0.14)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- North America > United States > California (0.04)
- (2 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.93)
- Transportation > Ground > Road (0.93)
- Information Technology (0.67)
- Automobiles & Trucks (0.67)
- Health & Medicine (0.67)
- North America > Canada > Ontario > Toronto (0.14)
- Europe > Switzerland > Zürich > Zürich (0.14)
- Europe > France (0.04)
- (5 more...)
Relational Concept Bottleneck Models
The design of interpretable deep learning models working in relational domains poses an open challenge: interpretable deep learning methods, such as Concept Bottleneck Models (CBMs), are not designed to solve relational problems, while relational deep learning models, such as Graph Neural Networks (GNNs), are not as interpretable as CBMs. To overcome these limitations, we propose Relational Concept Bottleneck Models (R-CBMs), a family of relational deep learning methods providing interpretable task predictions. We show that R-CBMs can represent both standard CBMs and message-passing GNNs as special cases. To evaluate the effectiveness and versatility of these models, we designed a class of experimental problems ranging from image classification to link prediction in knowledge graphs. In particular, we show that R-CBMs (i) match the generalization performance of existing relational black boxes, (ii) support the generation of quantified concept-based explanations, (iii) respond effectively to test-time interventions, and (iv) withstand demanding settings, including out-of-distribution scenarios, limited training data regimes, and scarce concept supervision.
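The abstract presents R-CBMs as generalizing standard CBMs; as a reference point, here is a minimal, hedged sketch of that base case: an input-to-concept predictor followed by a concept-to-task predictor, plus a test-time intervention. All module names, layer sizes, and the intervention snippet are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a standard (non-relational) concept bottleneck model, the special
# case R-CBMs subsume. Names and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    def __init__(self, input_dim: int, n_concepts: int, n_classes: int):
        super().__init__()
        # g: maps raw inputs to concept activations (the "bottleneck")
        self.concept_predictor = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(), nn.Linear(128, n_concepts)
        )
        # f: maps predicted concepts to the final task label
        self.task_predictor = nn.Linear(n_concepts, n_classes)

    def forward(self, x: torch.Tensor):
        concepts = torch.sigmoid(self.concept_predictor(x))  # human-readable concept scores
        task_logits = self.task_predictor(concepts)           # prediction depends only on concepts
        return concepts, task_logits

# Test-time intervention: overwrite a mispredicted concept and re-run the task head only.
model = ConceptBottleneckModel(input_dim=64, n_concepts=10, n_classes=5)
x = torch.randn(2, 64)
with torch.no_grad():
    concepts, _ = model(x)
    concepts[:, 3] = 1.0                                      # an expert corrects concept 3
    corrected_logits = model.task_predictor(concepts)
print(corrected_logits.shape)                                 # (2, 5)
```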
VLG-CBM: Training Concept Bottleneck Models with Vision-Language Guidance
Concept Bottleneck Models (CBMs) provide interpretable predictions by introducing an intermediate Concept Bottleneck Layer (CBL), which encodes human-understandable concepts to explain the model's decisions. Recent works have proposed using Large Language Models and pre-trained Vision-Language Models to automate the training of CBMs, making it more scalable and automatic. However, existing approaches still fall short in two aspects. First, the concepts predicted by the CBL often mismatch the input image, raising doubts about the faithfulness of the interpretation. Second, concept values have been shown to encode unintended information: even a set of random concepts can achieve test accuracy comparable to state-of-the-art CBMs. To address these critical limitations, we propose a novel framework called the Vision-Language-Guided Concept Bottleneck Model (VLG-CBM), which enables faithful interpretability along with improved performance. Our method leverages off-the-shelf open-domain grounded object detectors to provide visually grounded concept annotations, which largely enhance the faithfulness of concept prediction while further improving model performance. In addition, we propose a new metric, the Number of Effective Concepts (NEC), to control information leakage and provide better interpretability. Extensive evaluations across five standard benchmarks show that VLG-CBM outperforms existing methods by at least 4.27% and up to 51.09% in average accuracy at NEC=5 (denoted ANEC-5), and by at least 0.45% and up to 29.78% in average accuracy across different NECs (denoted ANEC-avg), while preserving both the faithfulness and interpretability of the learned concepts.
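The abstract introduces NEC only by name. One plausible reading, assumed here purely for illustration, is that NEC counts the concepts the final linear head actually uses per class, and that it can be capped by pruning small weights; the sketch below implements that assumption, not the paper's definition.

```python
# Hedged sketch: cap the number of concepts each class relies on by keeping only the
# largest-magnitude concept weights in the final linear head. The function name and
# the pruning rule are assumptions, not VLG-CBM's actual procedure.
import torch
import torch.nn as nn

def cap_effective_concepts(head: nn.Linear, nec: int) -> nn.Linear:
    """Keep, per class, only the `nec` concept weights with the largest magnitude."""
    with torch.no_grad():
        w = head.weight                            # shape: (n_classes, n_concepts)
        topk = w.abs().topk(k=nec, dim=1).indices
        mask = torch.zeros_like(w)
        mask.scatter_(1, topk, 1.0)
        head.weight.mul_(mask)                     # zero out all other concept weights
    return head

head = nn.Linear(in_features=200, out_features=10)  # 200 concepts, 10 classes
head = cap_effective_concepts(head, nec=5)           # roughly the spirit of an NEC=5 setting
print((head.weight.abs() > 0).sum(dim=1))            # at most 5 effective concepts per class
```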
Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off
Deploying AI-powered systems requires trustworthy models that support effective human interaction, going beyond raw prediction accuracy. Concept bottleneck models promote trustworthiness by conditioning classification tasks on an intermediate level of human-like concepts. This enables human interventions that can correct mispredicted concepts to improve the model's performance. However, existing concept bottleneck models are unable to find optimal compromises between high task accuracy, robust concept-based explanations, and effective interventions on concepts, particularly in real-world conditions where complete and accurate concept supervision is scarce. To address this, we propose Concept Embedding Models, a novel family of concept bottleneck models that goes beyond the current accuracy-vs-interpretability trade-off by learning interpretable high-dimensional concept representations. Our experiments demonstrate that Concept Embedding Models (1) attain better or competitive task accuracy w.r.t.
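One way to realize the high-dimensional concept representations the abstract describes is to keep, per concept, an embedding for its "on" and "off" state and mix them by the predicted concept probability; the sketch below follows that idea. Layer sizes, module names, and the per-concept scoring function are assumptions for illustration, not the authors' exact architecture.

```python
# Hedged sketch of a concept embedding layer: each concept produces an "active" and an
# "inactive" embedding, a score mixes them, and the mixed embeddings feed the task head.
import torch
import torch.nn as nn

class ConceptEmbeddingLayer(nn.Module):
    def __init__(self, feat_dim: int, n_concepts: int, emb_dim: int):
        super().__init__()
        self.pos = nn.ModuleList([nn.Linear(feat_dim, emb_dim) for _ in range(n_concepts)])
        self.neg = nn.ModuleList([nn.Linear(feat_dim, emb_dim) for _ in range(n_concepts)])
        self.score = nn.ModuleList([nn.Linear(2 * emb_dim, 1) for _ in range(n_concepts)])

    def forward(self, h: torch.Tensor):
        embs, probs = [], []
        for pos, neg, score in zip(self.pos, self.neg, self.score):
            c_pos, c_neg = pos(h), neg(h)                           # "concept on" / "concept off" embeddings
            p = torch.sigmoid(score(torch.cat([c_pos, c_neg], dim=-1)))
            embs.append(p * c_pos + (1 - p) * c_neg)                # mixed high-dimensional concept state
            probs.append(p)
        return torch.cat(embs, dim=-1), torch.cat(probs, dim=-1)    # embeddings for the task, probs for explanation

layer = ConceptEmbeddingLayer(feat_dim=64, n_concepts=4, emb_dim=16)
emb, p = layer(torch.randn(2, 64))
print(emb.shape, p.shape)   # (2, 64) and (2, 4)
```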
Flexible Concept Bottleneck Model
Du, Xingbo, Dou, Qiantong, Fan, Lei, Zhang, Rui
Concept bottleneck models (CBMs) improve neural network interpretability by introducing an intermediate layer that maps human-understandable concepts to predictions. Recent work has explored the use of vision-language models (VLMs) to automate concept selection and annotation. However, existing VLM-based CBMs typically require full model retraining when new concepts are involved, which limits their adaptability and flexibility in real-world scenarios, especially considering the rapid evolution of vision-language foundation models. To address these issues, we propose Flexible Concept Bottleneck Model (FCBM), which supports dynamic concept adaptation, including complete replacement of the original concept set. Specifically, we design a hypernetwork that generates prediction weights based on concept embeddings, allowing seamless integration of new concepts without retraining the entire model. In addition, we introduce a modified sparsemax module with a learnable temperature parameter that dynamically selects the most relevant concepts, enabling the model to focus on the most informative features. Extensive experiments on five public benchmarks demonstrate that our method achieves accuracy comparable to state-of-the-art baselines with a similar number of effective concepts.
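The hypernetwork idea described above (prediction weights generated from concept embeddings so the concept set can be swapped without retraining a fixed weight matrix) can be sketched as follows. Dimensions, names, and the temperature-scaled softmax standing in for the paper's modified sparsemax are all assumptions for illustration.

```python
# Hedged sketch of a hypernetwork-style concept head: the classifier weights are produced
# from concept embeddings instead of being stored per concept, so replacing the concept
# set only changes the inputs, not the trained parameters.
import torch
import torch.nn as nn

class HyperConceptHead(nn.Module):
    def __init__(self, concept_emb_dim: int, n_classes: int):
        super().__init__()
        # Hypernetwork: maps each concept embedding to that concept's per-class weights.
        self.hyper = nn.Linear(concept_emb_dim, n_classes)
        self.log_temperature = nn.Parameter(torch.zeros(()))   # learnable temperature

    def forward(self, concept_scores: torch.Tensor, concept_embs: torch.Tensor):
        # concept_scores: (batch, n_concepts); concept_embs: (n_concepts, emb_dim)
        w = self.hyper(concept_embs)                            # (n_concepts, n_classes) generated weights
        t = self.log_temperature.exp()
        attn = torch.softmax(concept_scores / t, dim=-1)        # simplified stand-in for sparsemax selection
        return attn @ w                                         # (batch, n_classes) task logits

head = HyperConceptHead(concept_emb_dim=512, n_classes=10)
new_embs = torch.randn(45, 512)                                 # a replaced concept set of a different size
logits = head(torch.randn(4, 45), new_embs)
print(logits.shape)                                             # (4, 10)
```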
- Asia > China (0.04)
- Oceania > Australia > New South Wales (0.04)
- North America > United States > California (0.04)
- North America > Canada > Ontario > Toronto (0.04)
Towards more holistic interpretability: A lightweight disentangled Concept Bottleneck Model
Huang, Gaoxiang, Lai, Songning, Yue, Yutao
Concept Bottleneck Models (CBMs) enhance interpretability by predicting human-understandable concepts as intermediate representations. However, existing CBMs often suffer from input-to-concept mapping bias and limited controllability, which restricts their practical value and undermines the trustworthiness of concept-based explanations. We propose a lightweight Disentangled Concept Bottleneck Model (LDCBM) that automatically groups visual features into semantically meaningful components without region annotations. By introducing a filter grouping loss and joint concept supervision, our method improves the alignment between visual patterns and concepts, enabling more transparent and robust decision-making. Notably, experiments on three diverse datasets demonstrate that LDCBM achieves higher concept and class accuracy, outperforming previous CBMs in both interpretability and classification performance. By grounding concepts in visual evidence, our method overcomes a fundamental limitation of prior models and enhances the reliability of interpretable AI.
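The abstract names a "filter grouping loss" without detailing it. Purely as an illustration of the general idea, one could softly assign filters to concept groups and penalize both spread within a group and uncertain assignments; the sketch below does exactly that and is not the paper's actual loss.

```python
# Heavily hedged sketch of one possible filter grouping loss: filters are softly assigned
# to groups, pulled toward their group centroid, and encouraged to commit to one group.
import torch
import torch.nn.functional as F

def filter_grouping_loss(filters: torch.Tensor, assignment_logits: torch.Tensor) -> torch.Tensor:
    # filters: (n_filters, d) flattened conv filters; assignment_logits: (n_filters, n_groups)
    assign = F.softmax(assignment_logits, dim=-1)                       # soft filter-to-group assignment
    centroids = (assign.t() @ filters) / (assign.sum(0, keepdim=True).t() + 1e-8)
    recon = assign @ centroids                                          # each filter's group centroid
    compactness = F.mse_loss(recon, filters)                            # filters close to their group
    confidence = -(assign * assign.clamp_min(1e-8).log()).sum(-1).mean()  # low-entropy assignments
    return compactness + 0.1 * confidence

loss = filter_grouping_loss(torch.randn(64, 27), torch.randn(64, 5, requires_grad=True))
loss.backward()
```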
- Asia > China > Guangdong Province > Guangzhou (0.04)
- Europe > Switzerland (0.04)
- Information Technology > Artificial Intelligence > Natural Language (0.94)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.47)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.46)
- Information Technology > Artificial Intelligence > Vision > Image Understanding (0.34)
CLMN: Concept based Language Models via Neural Symbolic Reasoning
Deep learning's remarkable performance in natural language processing (NLP) faces critical interpretability challenges, particularly in high-stakes domains like healthcare and finance where model transparency is essential. While concept bottleneck models (CBMs) have enhanced interpretability in computer vision by linking predictions to human-understandable concepts, their adaptation to NLP remains understudied, with persistent limitations. Existing approaches either enforce rigid binary concept activations that degrade textual representation quality or obscure semantic interpretability through latent concept embeddings, while failing to capture the dynamic concept interactions crucial for understanding linguistic nuances such as negation or contextual modification. This paper proposes the Concept Language Model Network (CLMN), a novel neural-symbolic framework that reconciles performance and interpretability through continuous concept embeddings enhanced by fuzzy-logic-based reasoning. CLMN addresses the information loss in traditional CBMs by projecting concepts into an interpretable embedding space while preserving human-readable semantics, and it introduces adaptive concept interaction modeling through learnable neural-symbolic rules that explicitly represent how concepts influence each other and the final prediction. By supplementing original text features with concept-aware representations and enabling the automatic derivation of interpretable logic rules, our framework achieves superior performance on multiple NLP benchmarks while providing transparent explanations.
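The fuzzy-logic reasoning over concept activations that the abstract describes can be illustrated with soft rules whose memberships are learned and combined with a product t-norm AND, each rule then voting linearly for a class. The rule parameterization, the choice of fuzzy operators, and all names below are assumptions for illustration, not CLMN's actual design.

```python
# Hedged sketch of fuzzy-rule reasoning over concept scores: each rule softly selects
# concepts, fires via a product t-norm AND, and contributes to the class logits.
import torch
import torch.nn as nn

class FuzzyRuleLayer(nn.Module):
    def __init__(self, n_concepts: int, n_rules: int, n_classes: int):
        super().__init__()
        self.membership = nn.Parameter(torch.randn(n_rules, n_concepts))  # how much each concept enters each rule
        self.rule_to_class = nn.Linear(n_rules, n_classes)

    def forward(self, concepts: torch.Tensor):
        # concepts: (batch, n_concepts) with values in [0, 1]
        m = torch.sigmoid(self.membership)                                # soft rule membership in [0, 1]
        # Product t-norm AND: concepts outside a rule are relaxed toward 1 and do not constrain it.
        relaxed = 1.0 - m.unsqueeze(0) * (1.0 - concepts.unsqueeze(1))    # (batch, n_rules, n_concepts)
        rule_activations = relaxed.prod(dim=-1)                           # (batch, n_rules) rule firings
        return self.rule_to_class(rule_activations)                       # class logits from rule firings

layer = FuzzyRuleLayer(n_concepts=8, n_rules=16, n_classes=3)
logits = layer(torch.rand(4, 8))
print(logits.shape)   # (4, 3)
```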